Multi-task learning model for charge prediction with action words
Xiao GUO, Yanping CHEN, Ruixue TANG, Ruizhang HUANG, Yongbin QIN
Journal of Computer Applications    2024, 44 (1): 159-166.   DOI: 10.11772/j.issn.1001-9081.2023010029

With the application of artificial intelligence technology in the judicial field, charge prediction based on case descriptions has become an important research topic. The task is to predict the charge from the description of a case, whose wording is professional, concise and rigorous. However, existing methods often rely on textual features alone, ignore the differences among relevant elements, and make little use of the action words that vary across cases. To address these problems, a multi-task learning model for charge prediction based on action words was proposed. Firstly, the spans of action words were generated by a boundary identifier and the core content of the case was extracted. Secondly, the charge was predicted from structural features constructed over the action words. Finally, action word identification and charge prediction were modeled jointly, and parameter sharing improved the generalization of the model. A multi-task dataset covering action word identification and charge prediction was constructed for verification. Experimental results show that the proposed model achieves an F value of 83.27% on the action word identification task and 84.29% on the charge prediction task; compared with BERT-CNN, the F values increase by 0.57% and 2.61% respectively, verifying the advantage of the proposed model in action word identification and charge prediction.
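The joint modeling described above can be pictured as a shared encoder feeding two task-specific heads. The following PyTorch-style sketch is not the authors' implementation; the BiLSTM encoder, head sizes and loss weighting are assumptions made only to show how parameter sharing between action word identification and charge prediction can work.

import torch
import torch.nn as nn

class JointChargeModel(nn.Module):
    # Shared-encoder multi-task sketch: one head tags action-word boundaries,
    # the other predicts the charge for the whole case description.
    def __init__(self, vocab_size, hidden=256, n_boundary_tags=3, n_charges=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden // 2, batch_first=True,
                               bidirectional=True)               # parameters shared by both tasks
        self.boundary_head = nn.Linear(hidden, n_boundary_tags)  # e.g. B/I/O tags per token
        self.charge_head = nn.Linear(hidden, n_charges)          # case-level charge label

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))               # (batch, seq_len, hidden)
        boundary_logits = self.boundary_head(h)                  # per-token span boundaries
        charge_logits = self.charge_head(h.mean(dim=1))          # pooled case representation
        return boundary_logits, charge_logits

def joint_loss(boundary_logits, charge_logits, tag_gold, charge_gold, alpha=0.5):
    # Weighted sum of the two task losses; alpha is an assumed trade-off weight.
    ce = nn.CrossEntropyLoss()
    tag_loss = ce(boundary_logits.flatten(0, 1), tag_gold.flatten())
    return alpha * tag_loss + (1 - alpha) * ce(charge_logits, charge_gold)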

Scholar fine-grained information extraction method fused with local semantic features
Yuelin TIAN, Ruizhang HUANG, Lina REN
Journal of Computer Applications    2023, 43 (9): 2707-2714.   DOI: 10.11772/j.issn.1001-9081.2022091407

Extracting fine-grained scholar information, such as research directions and education experience, from scholar homepages is important for applications such as building large-scale professional talent pools. To address the problem that existing scholar fine-grained information extraction methods cannot effectively exploit contextual semantic associations, a scholar fine-grained information extraction method fusing local semantic features was proposed, which uses the semantic associations in the local text to extract fine-grained information from scholar homepages. Firstly, general semantic representations were learned by the whole-word-masking Chinese pre-trained model RoBERTa-wwm-ext. Then, the representation vector of the target sentence and the representation vectors of its locally adjacent text, taken from the general semantic embeddings, were jointly fed into a Convolutional Neural Network (CNN) for local semantic fusion, yielding a higher-dimensional representation vector of the target sentence. Finally, this representation vector was mapped from the high-dimensional space to the low-dimensional label space to extract the fine-grained information from the scholar homepage. Experimental results show that the micro-averaged F1 score of the proposed method reaches 93.43%, which is 8.60 percentage points higher than that of the RoBERTa-wwm-ext-TextCNN method without local semantic fusion, verifying the effectiveness of the proposed method on the scholar fine-grained information extraction task.
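As a rough sketch of the local semantic fusion step, assume the sentence-level embeddings of a homepage (for example the [CLS] vectors from RoBERTa-wwm-ext) are already available; the window size, channel count and classifier shape below are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class LocalSemanticFusion(nn.Module):
    # Fuse a target sentence vector with its neighbours via a 1-D convolution,
    # then map the fused vector to the fine-grained label space.
    def __init__(self, emb_dim=768, n_labels=10, window=3, channels=256):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, channels, kernel_size=window, padding=window // 2)
        self.classifier = nn.Linear(channels, n_labels)

    def forward(self, sent_embs, target_idx):
        # sent_embs: (num_sentences, emb_dim) embeddings of one homepage's sentences.
        x = sent_embs.t().unsqueeze(0)            # (1, emb_dim, num_sentences)
        fused = torch.relu(self.conv(x))          # each position now mixes adjacent sentences
        target_repr = fused[0, :, target_idx]     # fused representation of the target sentence
        return self.classifier(target_repr)       # logits over fine-grained information labels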

Hierarchical storyline generation method for hot news events
Dong LIU, Chuan LIN, Lina REN, Ruizhang HUANG
Journal of Computer Applications    2023, 43 (8): 2376-2381.   DOI: 10.11772/j.issn.1001-9081.2022091377

Hot news events develop through rich stages, each with its own narrative, and as an event unfolds its storyline tends to evolve hierarchically. Aiming at the poor interpretability and insufficient hierarchy of the storylines produced by existing storyline generation methods, a Hierarchical Storyline Generation Method (HSGM) for hot news events was proposed. First, an improved hotword algorithm was used to select the main seed events and construct the trunk of the storyline. Second, hotwords of branch events were selected to enhance the interpretability of the branches. Third, within each branch, a storyline coherence selection strategy fusing hotword relevance and a dynamic time penalty was used to strengthen the connection between parent and child events, building hierarchical hotwords and thus a multi-level storyline. In addition, considering the incubation period of hot news events, a hatchery was added to the storyline construction process so that early events are not neglected because their hotness is still insufficient. Experimental results on two real self-constructed datasets show that, in event tracking, HSGM increases the F score by 4.51% and 6.41%, and by 20.71% and 13.01%, respectively, compared with the methods based on singlePass and k-means; in storyline construction, HSGM outperforms Story Forest and Story Graph in accuracy, comprehensibility and integrity on both datasets.
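A minimal sketch of the coherence selection strategy, under the assumption that hotword relevance is measured by set overlap and the dynamic time penalty decays exponentially (both are illustrative choices, not the paper's exact formulas):

import math

def coherence_score(parent_hotwords, child_hotwords, dt_hours, tau=24.0):
    # Fuse hotword relevance (Jaccard overlap here) with a time penalty so that
    # a child event prefers a recent, topically close parent event.
    inter = len(set(parent_hotwords) & set(child_hotwords))
    union = len(set(parent_hotwords) | set(child_hotwords)) or 1
    relevance = inter / union
    time_penalty = math.exp(-max(dt_hours, 0.0) / tau)   # older parents are penalized
    return relevance * time_penalty

def attach_event(event, candidate_parents):
    # Attach a new event to the candidate parent with the highest coherence score.
    return max(candidate_parents,
               key=lambda p: coherence_score(p["hotwords"], event["hotwords"],
                                             event["time"] - p["time"]))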

DDDC: deep dynamic document clustering model
Hui LU, Ruizhang HUANG, Jingjing XUE, Lina REN, Chuan LIN
Journal of Computer Applications    2023, 43 (8): 2370-2375.   DOI: 10.11772/j.issn.1001-9081.2022091354

The rapid development of the Internet has led to explosive growth of news data. How to capture the topic evolution of popular events from massive news data has become a hot research topic in document analysis. However, commonly used traditional dynamic clustering models are inflexible and inefficient on large-scale datasets, while existing deep document clustering models lack a general way to capture the topic evolution of time-series data. To address these problems, a Deep Dynamic Document Clustering (DDDC) model was designed. In this model, based on existing deep variational inference algorithms, topic distributions incorporating the content of previous time slices were captured on each time slice, and the evolution of event topics was recovered from these distributions through clustering. Experimental results on real news datasets show that, compared with Dynamic Topic Model (DTM), Variational Deep Embedding (VaDE) and other algorithms, the DDDC model improves clustering accuracy by at least 4 percentage points on average and Normalized Mutual Information (NMI) by at least 3 percentage points in each time slice across different datasets, verifying the effectiveness of the DDDC model.
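The slice-by-slice evolution can be pictured with the loop below; model.infer_topics is a hypothetical stand-in for the deep variational inference step, and k-means is used here only as a simple clustering example, not as the paper's clustering component.

from sklearn.cluster import KMeans

def dynamic_cluster(slices, model, n_clusters):
    # slices: list of document-feature matrices, one per time slice.
    prev_topics = None
    assignments = []
    for docs in slices:
        # Topic distribution of this slice, conditioned on the previous slice.
        topics = model.infer_topics(docs, prior=prev_topics)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(topics)
        assignments.append(labels)
        prev_topics = topics          # carry topic information forward in time
    return assignments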

Structured deep text clustering model based on multi-layer semantic fusion
Shengwei MA, Ruizhang HUANG, Lina REN, Chuan LIN
Journal of Computer Applications    2023, 43 (8): 2364-2369.   DOI: 10.11772/j.issn.1001-9081.2022091356

In recent years, owing to the advantages of the structural information captured by Graph Neural Networks (GNNs) in machine learning, GNNs have been incorporated into deep text clustering. However, current deep text clustering algorithms combined with GNN ignore the important role of the decoder in complementing semantics when fusing textual semantic information, which leaves the data generation part short of semantic information. To address this problem, a Structured Deep text Clustering Model based on multi-layer Semantic fusion (SDCMS) was proposed. In this model, a GNN was used to integrate structural information into the decoder, the representation of text data was enhanced through layer-by-layer semantic complementation, and better network parameters were obtained through a triple self-supervision mechanism. Results of experiments on five real datasets, Citeseer, ACM, Reuters, DBLP and Abstract, show that compared with the current best Attention-driven Graph Clustering Network (AGCN) model, SDCMS improves accuracy, Normalized Mutual Information (NMI) and Adjusted Rand Index (ARI) by up to 5.853%, 9.922% and 8.142% respectively.
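One way to read "integrating structural information into the decoder" is a decoder layer that mixes a GCN-style propagation with the ordinary reconstruction path, as in the sketch below; the mixing weight and layer shapes are assumptions, not SDCMS's exact design.

import torch
import torch.nn as nn

class GraphDecoderLayer(nn.Module):
    # One decoder layer: plain decoding and structure-aware (adjacency-propagated)
    # decoding are blended to complement the semantics layer by layer.
    def __init__(self, in_dim, out_dim, mix=0.5):
        super().__init__()
        self.dec = nn.Linear(in_dim, out_dim)
        self.gcn = nn.Linear(in_dim, out_dim, bias=False)
        self.mix = mix

    def forward(self, h, adj_norm):
        # h: (num_docs, in_dim) hidden states; adj_norm: normalized document graph.
        dec_out = torch.relu(self.dec(h))
        gcn_out = torch.relu(adj_norm @ self.gcn(h))
        return self.mix * dec_out + (1 - self.mix) * gcn_out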

Recognition of sentencing circumstances in adjudication documents based on abductive learning
Jinye LI, Ruizhang HUANG, Yongbin QIN, Yanping CHEN, Xiaoyu TIAN
Journal of Computer Applications    2022, 42 (6): 1802-1807.   DOI: 10.11772/j.issn.1001-9081.2021091748

Aiming at the poor recognition of sentencing circumstances in adjudication documents caused by the lack of labeled data, the low quality of annotations and the strong logical structure of the judicial field, a sentencing circumstance recognition model based on abductive learning, named ABL-CON (ABductive Learning in CONfidence), was proposed. Firstly, a neural network was combined with domain logic inference in a semi-supervised manner, and confidence learning was used to characterize the confidence of circumstance recognition. Then, the logically inconsistent circumstances predicted by the neural network on unlabeled data were corrected, and the recognition model was retrained to improve recognition accuracy. Experimental results on a self-constructed judicial dataset show that the ABL-CON model using 50% labeled and 50% unlabeled data achieves 90.35% Macro_F1 and 90.58% Micro_F1, which is better than BERT (Bidirectional Encoder Representations from Transformers) and SS-ABL (Semi-Supervised ABductive Learning) under the same conditions and also surpasses the BERT model trained with 100% labeled data. By correcting illogical labels through logical abduction, the ABL-CON model can effectively improve both the logical rationality of labels and the ability to recognize them.
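The confidence-guided correction of illogical predictions can be sketched as follows; the rule interface, the greedy repair and the confidence threshold are simplifications made for illustration, not the authors' exact abduction procedure.

def abductive_correct(probs, rules, threshold=0.9):
    # probs: dict mapping circumstance label -> predicted probability for one document.
    # rules: callables returning True when a label set is logically consistent.
    predicted = {lab for lab, p in probs.items() if p >= 0.5}
    if all(rule(predicted) for rule in rules):
        return predicted                      # already consistent, keep as pseudo-labels
    # Greedily drop the least confident labels until the set satisfies every rule.
    for lab in sorted(predicted, key=lambda l: probs[l]):
        predicted.discard(lab)
        if all(rule(predicted) for rule in rules):
            break
    # Keep the corrected set for retraining only when the remaining labels are confident.
    return predicted if all(probs[l] >= threshold for l in predicted) else None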

Relation extraction method based on entity boundary combination
Hao LI, Yanping CHEN, Ruixue TANG, Ruizhang HUANG, Yongbin QIN, Guorong WANG, Xi TAN
Journal of Computer Applications    2022, 42 (6): 1796-1801.   DOI: 10.11772/j.issn.1001-9081.2021091747

Relation extraction aims to extract the semantic relationships between entities from text. As the upstream task of relation extraction, entity recognition generates errors and propagates them to relation extraction, resulting in cascading errors. Compared with entities, entity boundaries are finer-grained and less ambiguous, and therefore easier to recognize. A relation extraction method based on entity boundary combination was therefore proposed, which skips entity recognition and performs relation extraction by combining entity boundaries in pairs. Since boundary recognition performs better than entity recognition, the problem of error propagation was alleviated; in addition, performance was further improved by adding entity type and position features through feature combination, which reduced the impact of error propagation. Experimental results on the ACE 2005 English dataset show that the proposed method outperforms the table-sequence encoders method by 8.61 percentage points in macro-averaged F1 score.
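The pairing of boundary detections into candidate relation arguments, with type and position features attached, might look like the sketch below; the feature layout is an assumption made for illustration.

from itertools import product

def boundary_pairs(boundaries):
    # boundaries: list of (position, entity_type) produced by a boundary identifier.
    candidates = []
    for (pos1, type1), (pos2, type2) in product(boundaries, repeat=2):
        if pos1 == pos2:
            continue
        candidates.append({
            "head_pos": pos1, "tail_pos": pos2,        # position features
            "head_type": type1, "tail_type": type2,    # type features
            "distance": abs(pos1 - pos2),
        })
    return candidates   # each candidate pair is then scored by a relation classifier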

Short text sentiment analysis based on parallel hybrid neural network model
CHEN Jie, SHAO Zhiqing, ZHANG Huanhuan, FEI Jiahui
Journal of Computer Applications    2019, 39 (8): 2192-2197.   DOI: 10.11772/j.issn.1001-9081.2018122552
To address the problems that the traditional Convolutional Neural Network (CNN) ignores the contextual semantics of words in sentiment analysis tasks and loses a large amount of feature information during max pooling, which limit the text classification performance of the model, a parallel hybrid neural network model named CA-BGA (Convolutional Neural Network Attention and Bidirectional Gated Recurrent Unit Attention) was proposed. Firstly, a feature fusion method was adopted to integrate a Bidirectional Gated Recurrent Unit (BiGRU) with the output of the CNN, enhancing semantic learning by incorporating the global semantic features of sentences. Then, an attention mechanism was introduced between the convolutional layer and the pooling layer of the CNN as well as at the output of the BiGRU, to reduce noise interference while retaining more feature information. Finally, the parallel hybrid neural network model was constructed based on these two improvements. Experimental results show that the proposed hybrid model converges quickly and effectively improves the F1 value of text classification, performing well on Chinese short text sentiment analysis tasks.
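The parallel structure can be sketched as two attention-equipped branches whose outputs are concatenated before classification; the dimensions and the simple additive attention below are assumptions, not the exact CA-BGA configuration.

import torch
import torch.nn as nn

class CABGA(nn.Module):
    # Parallel CNN branch (local n-gram features) and BiGRU branch (global semantics),
    # each followed by attention, concatenated for the final prediction.
    def __init__(self, vocab_size, emb=128, hidden=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.conv = nn.Conv1d(emb, hidden, kernel_size=3, padding=1)
        self.gru = nn.GRU(emb, hidden // 2, batch_first=True, bidirectional=True)
        self.att_cnn = nn.Linear(hidden, 1)   # attention between convolution and pooling
        self.att_gru = nn.Linear(hidden, 1)   # attention over BiGRU outputs
        self.fc = nn.Linear(hidden * 2, n_classes)

    @staticmethod
    def attend(h, scorer):
        w = torch.softmax(scorer(h), dim=1)   # (batch, seq_len, 1) attention weights
        return (w * h).sum(dim=1)             # weighted sum keeps more feature information

    def forward(self, token_ids):
        e = self.embed(token_ids)                                    # (batch, seq_len, emb)
        c = torch.relu(self.conv(e.transpose(1, 2))).transpose(1, 2)
        g, _ = self.gru(e)
        fused = torch.cat([self.attend(c, self.att_cnn),
                           self.attend(g, self.att_gru)], dim=-1)
        return self.fc(fused)
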
Processing method of INS/GPS information delay based on factor graph algorithm
GAO Junqiang, TANG Xiaqing, ZHANG Huan, GUO Libin
Journal of Computer Applications    2018, 38 (11): 3342-3347.   DOI: 10.11772/j.issn.1001-9081.2018040814
Aiming at the poor real-time performance of Inertial Navigation System (INS)/Global Positioning System (GPS) integrated navigation caused by GPS information delay, a processing method was proposed that exploits the ability of the factor graph algorithm to fuse asynchronous measurements at a single fusion time. Before the system received GPS information, the factor nodes of the INS information were added to the factor graph model, and the integrated navigation results were obtained by incremental inference to ensure the real-time performance of the system. After the system received the GPS information, factor nodes for the GPS information were added to the factor graph model to correct the INS error, thereby keeping the system accurate over a long period. Simulation results show that as the GPS information delay grows, the correction of the INS error by the real-time navigation state deteriorates, while the navigation state just updated by the GPS information can still correct the INS error effectively. The factor graph algorithm avoids the adverse effect of GPS information delay on the real-time performance of the INS/GPS integrated navigation system and preserves the accuracy of the system.
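The key point, that a delayed GPS factor is attached to the state node of its measurement time rather than its arrival time, can be sketched without any particular factor graph library; the class below and its optimize placeholder are illustrative only, not the paper's implementation.

class DelayedFusionGraph:
    # Minimal sketch: INS factors are added and solved immediately for real-time output;
    # a late GPS fix is attached at its own timestamp, so re-optimization corrects
    # the past states and, through them, the current INS error.
    def __init__(self):
        self.nodes = {}      # timestamp -> navigation state estimate
        self.factors = []    # (timestamp, kind, measurement)

    def add_ins(self, t, ins_measurement):
        self.factors.append((t, "INS", ins_measurement))
        self.nodes.setdefault(t, None)
        self.optimize()                     # incremental inference keeps output real-time

    def add_delayed_gps(self, t_measured, gps_fix):
        self.factors.append((t_measured, "GPS", gps_fix))   # anchored at measurement time
        self.optimize()

    def optimize(self):
        pass    # placeholder for incremental nonlinear least-squares inference
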
Deep sparse auto-encoder method using extreme learning machine for facial features
ZHANG Huanhuan, HONG Min, YUAN Yubo
Journal of Computer Applications    2018, 38 (11): 3193-3198.   DOI: 10.11772/j.issn.1001-9081.2018041274
Focusing on the low recognition rates caused by inaccurate input features in recognition systems, an efficient Deep Sparse Auto-Encoder (DSAE) method using Extreme Learning Machine (ELM) for facial features was proposed. Firstly, the truncated nuclear norm was used to construct the loss function, and sparse features of face images were extracted by minimizing this loss. Secondly, facial features were self-encoded by an Extreme Learning Machine Auto-Encoder (ELM-AE) model to achieve dimension reduction and noise filtering. Thirdly, the optimal depth structure was obtained by minimizing the empirical risk. Experimental results on the ORL, IMM, Yale and UMIST datasets show that the DSAE method not only achieves a higher recognition rate than ELM, Random Forest (RF) and other methods on high-dimensional face images, but also has good generalization performance.
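A compact sketch of one ELM auto-encoder step, assuming a sigmoid hidden layer and ridge-regularized output weights; the layer size and regularization value are illustrative, not the paper's settings.

import numpy as np

def elm_autoencoder(X, n_hidden=200, reg=1e-3, seed=0):
    # Random hidden layer, closed-form output weights, and the learned weights
    # reused to project the data into a lower-dimensional, denoised space.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                       # random feature map
    # beta solves min ||H beta - X||^2 + reg * ||beta||^2 in closed form.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return X @ beta.T, beta                                      # reduced data and weights
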
Density-sensitive clustering by data competition algorithm
SU Hui, GE Hongwei, ZHANG Huanqing, YUAN Yunhao
Journal of Computer Applications    2015, 35 (2): 444-447.   DOI: 10.11772/j.issn.1001-9081.2015.02.0444

Since the clustering by data competition algorithm performs poorly on complex datasets, a density-sensitive clustering by data competition algorithm was proposed. Firstly, a local distance was defined based on a density-sensitive distance measure to describe the local consistency of the data distribution. Secondly, a global distance was calculated from the local distances to describe the global consistency of the data distribution and mine the information of the data's spatial distribution, compensating for the inability of the Euclidean distance to describe global consistency. Finally, the global distance was used in the clustering by data competition algorithm. Comparison experiments on synthetic and real-life datasets were conducted between the proposed algorithm and the original clustering by data competition based on Euclidean distance. Simulation results show that the proposed algorithm achieves better clustering accuracy and overcomes the difficulty the original algorithm has in handling complex datasets.
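A common form of such a density-sensitive measure stretches long Euclidean edges with an exponential flexing factor and then takes shortest paths for the global distance; the sketch below follows that form, with rho as an assumed parameter rather than the paper's exact setting.

import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def density_sensitive_distance(X, rho=2.0):
    # Local length rho**d - 1 keeps within-cluster edges short and stretches edges
    # that cross sparse regions; the global distance is the shortest path over these
    # local lengths, which reflects the global consistency of the data distribution.
    d = cdist(X, X)                                  # pairwise Euclidean distances
    local = rho ** d - 1.0
    return shortest_path(local, method="FW", directed=False)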

Research on scanning strategy of DDoS attack in hybrid networks
Kai ZHANG, Huan-yan QIAN, Yan-gui XU
Journal of Computer Applications    2009, 29 (11): 2964-2968.  
Network Address Translator (NAT) technology is widely used on the Internet. With this technology, computers behind a NAT are isolated from the external network, and attackers can hardly find or invade them with conventional techniques. Some principles of DDoS attacks were briefly introduced, and a concrete analysis of the effect of NAT on DDoS attacks was given. To overcome the weakness of the traditional model in describing the propagation of DDoS attacks, a new scanning strategy based on Teredo technology and search engines was presented, with which an attacker could invade computers behind NAT more rapidly and use them more efficiently to carry out a DDoS attack. Compared with conventional intrusion methods, simulation results show that the new method is more effective and feasible.
Three new techniques for knowledge discovery by gene expression programming: transgene, overlapped gene expression and backtracking evolution
TANG Chang-jie, PENG Jing, ZHANG Huan, ZHONG Yi-xiao
Journal of Computer Applications    2005, 25 (09): 1978-1981.   DOI: 10.3724/SP.J.1087.2005.01978
Three new techniques were introduced by the authors in the past year: (a) the transgene technique, which guides the evolution direction and controls the knowledge discovery process by injecting gene segments; (b) overlapped gene expression, which borrows the idea of overlapping gene expression from biology and saves space in gene expression; (c) backtracking evolution, which is inspired by atavism in biology, proposes the concept of backtracking GEP algorithms, and designs geometrically increasing and accelerated increasing checkpoint sequences to restrict the backtracking process. Experiments show that each of the three techniques boosts the performance of GEP by one to two orders of magnitude.
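For example, a geometrically increasing checkpoint sequence of the kind used to restrict backtracking can be generated as below; the first checkpoint and the ratio are illustrative parameters, not values from the paper.

def geometric_checkpoints(max_generation, first=1, ratio=2):
    # Backtracking to an earlier population is only permitted at these generations,
    # so checkpoints become sparser as evolution proceeds.
    g, points = first, []
    while g <= max_generation:
        points.append(g)
        g *= ratio
    return points          # e.g. geometric_checkpoints(100) -> [1, 2, 4, 8, 16, 32, 64]
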